A Novel Chaotic Neural Network Using Memristive Synapse with Applications in Associative Memory

Authors

  • Xiaofang Hu
  • Shukai Duan
  • Lidan Wang
  • Chuandong Li
Abstract

2.1. CNN Based on Aihara's Model

The dynamical equation of a chaotic neuron model with continuous functions is described by [1]:

\[ x(t+1) = f\Big( A(t) - \alpha \sum_{d=0}^{t} k^{d}\, g\big(x(t-d)\big) - \theta \Big), \tag{2.1} \]

where x(t+1) is the output of the chaotic neuron at the discrete time t+1, an analogue value between 0 and 1; f is a continuous output function, such as the logistic function f(y) = 1/(1 + e^{-y/ε}) with steepness parameter ε; g is a function describing the relationship between the analogue output and the refractoriness and can be defined as g(x) = x; A(t) is the external stimulation; and α, k, and θ are the positive refractory scaling parameter, the damping factor, and the threshold, respectively.

Defining the internal state y(t+1) as

\[ y(t+1) = A(t) - \alpha \sum_{d=0}^{t} k^{d}\, g\big(x(t-d)\big) - \theta, \tag{2.2} \]

we can reduce (2.1) to the following:

\[ y(t+1) = k\,y(t) - \alpha\, g\big(f(y(t))\big) + a(t), \qquad x(t+1) = f\big(y(t+1)\big), \tag{2.3} \]

where a(t) is called the bifurcation parameter and is defined by

\[ a(t) = A(t) - kA(t-1) - \theta(1-k). \tag{2.4} \]

The neuron model with chaotic dynamics described above can be generalized as an element of a neural network called a chaotic neural network (CNN). The dynamics of the i-th chaotic neuron, with spatiotemporal summation of feedback inputs and externally applied inputs, in a CNN composed of M chaotic neurons and N externally applied inputs can be modeled as

\[ x_i(t+1) = f\big( \xi_i(t+1) + \eta_i(t+1) + \zeta_i(t+1) \big), \tag{2.5} \]

where the external inputs ξ_i(t+1), the feedback inputs η_i(t+1), and the refractoriness ζ_i(t+1) are defined by (2.6)–(2.8), respectively:

\[ \xi_i(t+1) = \sum_{j=1}^{N} V_{ij} A_j(t) + k_e\, \xi_i(t) = \sum_{j=1}^{N} V_{ij} \sum_{d=0}^{t} k_e^{d}\, A_j(t-d), \tag{2.6} \]

\[ \eta_i(t+1) = \sum_{j=1}^{M} W_{ij} x_j(t) + k_f\, \eta_i(t) = \sum_{j=1}^{M} W_{ij} \sum_{d=0}^{t} k_f^{d}\, x_j(t-d), \tag{2.7} \]

\[ \zeta_i(t+1) = -\alpha\, g\big(x_i(t)\big) + k_r\, \zeta_i(t) - \theta_i = -\alpha \sum_{d=0}^{t} k_r^{d}\, g\big(x_i(t-d)\big) - \theta_i, \tag{2.8} \]

where V_{ij} serves as the synaptic weight between the external inputs and the neurons. Similarly, W_{ij} denotes the synaptic weight between two neurons and is trained by a Hebbian learning algorithm, for example,

\[ W_{ij} = \sum_{k=1}^{M} x_i^{k} x_j^{k} \quad \text{or} \quad W_{ij} = \frac{1}{M} \sum_{k=1}^{M} \big(2x_i^{k} - 1\big)\big(2x_j^{k} - 1\big). \tag{2.9} \]
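As a minimal illustrative sketch (not the authors' implementation), the reduced single-neuron map (2.3) and the second Hebbian rule of (2.9) can be coded as follows. Parameter defaults follow the values used later in the text (k = 0.7, α = 1.0, ε = 0.02, y(0) = 0.5); here M in (2.9) is taken as the number of stored patterns.

```python
import math


def chaotic_neuron(a, k=0.7, alpha=1.0, eps=0.02, y0=0.5, n=1000):
    """Iterate the reduced Aihara neuron map (Eq. 2.3):
    y(t+1) = k*y(t) - alpha*g(f(y(t))) + a,  x(t+1) = f(y(t+1)),
    with logistic output f and g(x) = x, for a constant bifurcation
    parameter a. Returns the sequence of analogue outputs x(t)."""
    f = lambda y: 1.0 / (1.0 + math.exp(-y / eps))  # logistic output function
    y = y0
    xs = []
    for _ in range(n):
        y = k * y - alpha * f(y) + a  # g(x) = x, so g(f(y)) = f(y)
        xs.append(f(y))               # analogue output in (0, 1)
    return xs


def hebbian_weights(patterns):
    """Autoassociative Hebbian weights, second form of Eq. (2.9):
    W_ij = (1/M) * sum_k (2*x_i^k - 1)*(2*x_j^k - 1),
    over M stored binary {0,1} patterns."""
    m = len(patterns)
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        s = [2 * xi - 1 for xi in p]  # map {0,1} -> {-1,+1}
        for i in range(n):
            for j in range(n):
                w[i][j] += s[i] * s[j] / m
    return w
```

The resulting weight matrix is symmetric, as expected from the product form of the rule.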
2.2. Dynamical Behaviors Analysis

The firing rate of a neuron is a fundamental characteristic of the message it conveys to other neurons. It is variable and denotes the intensity of the neuron's activation state; as such, it ranges from near zero to some maximum depending on the level of activation to be conveyed. Traditionally, it has been thought that most of the relevant information is contained in the average firing rate of the neuron. The firing rate is usually defined by a temporal average: the number of spikes that occur in a given time window, divided by the length of the window [30]. Here, the average firing rate (excitation number) of the single chaotic neuron described by (2.3) is defined as

\[ \rho = \lim_{n \to \infty} \frac{1}{n} \sum_{t=0}^{n-1} h\big(x(t)\big), \tag{2.10} \]

where h is a transfer function representing the waveform-shaping dynamics of the axon, with a strict threshold for the propagation of action potentials initiated at the trigger zone; it is taken as h(x) = 1 for x ≥ 0.5 and h(x) = 0 for x < 0.5. With the other parameters set to k = 0.7, α = 1.0, ε = 0.02, and y(0) = 0.5, the average firing rate obtained by varying the bifurcation parameter a(t) from 0 to 1 is shown in Figure 1.

It is a characteristic of chaotic systems that initially nearby trajectories separate exponentially with time. The Lyapunov exponent (Lyapunov characteristic exponent) of a dynamical system is a quantity that characterizes this rate of separation of infinitesimally close trajectories. The rate of separation can differ for different orientations of the initial separation vector; thus, there is a spectrum of Lyapunov exponents, equal in number to the dimensionality of the phase space.
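Returning to (2.10), the firing-rate curve of Figure 1 can be sketched numerically as follows. This is a rough illustration under the stated parameter values, with a finite time window in place of the limit and a burn-in period to discard transients; it is not the authors' code.

```python
import math


def firing_rate(a, k=0.7, alpha=1.0, eps=0.02, y0=0.5, n=5000, burn_in=500):
    """Estimate the average firing rate rho (Eq. 2.10) of the reduced
    chaotic neuron map for a constant bifurcation parameter a, using
    the threshold transfer function h(x) = 1 if x >= 0.5 else 0."""
    f = lambda y: 1.0 / (1.0 + math.exp(-y / eps))  # logistic output
    y = y0
    fires = 0
    for t in range(burn_in + n):
        y = k * y - alpha * f(y) + a  # reduced map (Eq. 2.3), g(x) = x
        if t >= burn_in and f(y) >= 0.5:  # h(x(t)) = 1: the neuron fires
            fires += 1
    return fires / n


# Sweeping a from 0 to 1 traces out the curve of Figure 1:
# rates = [firing_rate(a / 100) for a in range(101)]
```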
It is common to refer to the largest one as the maximal Lyapunov exponent (MLE), because it determines a notion of predictability for the dynamical system.

Figure 1: The average firing rate of the chaotic neuron (average firing rate versus a).
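For a one-dimensional map such as the reduced model (2.3), the (maximal) Lyapunov exponent can be estimated by averaging the log of the map's derivative along an orbit. The sketch below (an assumption-laden illustration, not the paper's method) uses F(y) = k·y − α·f(y) + a, whose derivative is F′(y) = k − α·f′(y) with f′(y) = f(y)(1 − f(y))/ε for the logistic output function.

```python
import math


def lyapunov_exponent(a, k=0.7, alpha=1.0, eps=0.02, y0=0.5,
                      n=5000, burn_in=500):
    """Estimate the Lyapunov exponent of the 1-D reduced map
    F(y) = k*y - alpha*f(y) + a (Eq. 2.3 with g(x) = x) by averaging
    ln|F'(y)| along an orbit, after discarding a transient."""
    f = lambda y: 1.0 / (1.0 + math.exp(-y / eps))  # logistic output
    y = y0
    total = 0.0
    for t in range(burn_in + n):
        fy = f(y)
        if t >= burn_in:
            deriv = k - alpha * fy * (1.0 - fy) / eps  # F'(y)
            total += math.log(abs(deriv) + 1e-300)     # guard log(0)
        y = k * y - alpha * fy + a
    return total / n
```

A negative value indicates convergence of nearby trajectories (a stable orbit); a positive value indicates exponential divergence, the signature of chaos.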


Related articles

AN IMPROVED CONTROLLED CHAOTIC NEURAL NETWORK FOR PATTERN RECOGNITION

A sigmoid function is necessary for the creation of a chaotic neural network (CNN). In this paper, a new function for the CNN is proposed that can increase the speed of convergence. In the proposed method, we use a novel signal for controlling chaos. Both theoretical analysis and computer simulation results show that the performance of the CNN can be improved remarkably by using our method. By means of this...


Adaptive Synchronization of Memristor-based Chaotic Neural Systems

Chaotic neural networks consisting of a great number of chaotic neurons are able to reproduce the rich dynamics observed in biological nervous systems. In recent years, the memristor has attracted much interest in the efficient implementation of artificial synapses and neurons. This work addresses adaptive synchronization of a class of memristor-based neural chaotic systems using a novel adapti...


Experimental demonstration of associative memory with memristive neural networks

Synapses are essential elements for computation and information storage in both real and artificial neural systems. An artificial synapse needs to remember its past dynamical history, store a continuous set of states, and be "plastic" according to the pre-synaptic and post-synaptic neuronal activity. Here we show that all this can be accomplished by a memory-resistor (memristor for short). In p...


Pavlov's Dog Associative Learning Demonstrated on Synaptic-Like Organic Transistors

In this letter, we present an original demonstration of an associative learning neural network inspired by the famous Pavlov's dogs experiment. A single nanoparticle organic memory field effect transistor (NOMFET) is used to implement each synapse. We show how the physical properties of this dynamic memristive device can be used to perform low-power write operations for the learning and impleme...


A Self-Reconstructing Algorithm for Single and Multiple-Sensor Fault Isolation Based on Auto-Associative Neural Networks

Recently, different approaches have been developed in the field of sensor fault diagnostics based on the Auto-Associative Neural Network (AANN). In this paper we present a novel algorithm called the Self-Reconstructing Auto-Associative Neural Network (S-AANN), which is able to detect and isolate a single faulty sensor via reconstruction. We have also extended the algorithm to be applicable in multiple faul...


Targeting Spatio–Temporal Patterns in Chaotic Neural Network

We have studied chaos control in a chaotic neural network by limiting the phase space at varying time intervals. This provides controlled output patterns with different temporal periods depending upon the control parameters. The chaotic neural network constructed with chaotic neurons exhibits very rich dynamic behavior with a non-periodic associative memory. In the chaotic neural network, it i...



Journal: Abstract and Applied Analysis

Publication year: 2012